Introduction to Open Data Science - Course Project

About the Course

I heard about the course from an email sent by Prof. Kimmo himself and decided to join. So thanks, Kimmo! I am very excited to learn about all things Data Science.

What I expect to learn

I have used R and RStudio for about three years now. However, I have not yet tried out GitHub. I thought this course would be a good chance to learn it and to refresh my analytics skills.

A curious question

As I am a regular R user, I expect that things like

  • Importing and reading data
  • Visualizing data
  • Running linear models
  • Exploring data

would be familiar. I was wondering whether, during the course, we will also explore writing R packages.


Regression and model validation

This chapter focuses on performing and interpreting regression analysis.

Read the students2014 data

The data are from an international survey of Approaches to Learning, made possible by Teachers’ Academy funding for KV in 2013-2015. The data have been filtered to include only the variables desired for the analysis. The original data and variable descriptions can be found here.

data_analysis <- read.csv("learning2014.csv") # Read data from my local folder
str(data_analysis) # The data structure is a data frame.
## 'data.frame':    166 obs. of  7 variables:
##  $ gender  : chr  "F" "M" "F" "M" ...
##  $ Age     : int  53 55 49 53 49 38 50 37 37 42 ...
##  $ attitude: num  3.7 3.1 2.5 3.5 3.7 3.8 3.5 2.9 3.8 2.1 ...
##  $ deep    : num  3.58 2.92 3.5 3.5 3.67 ...
##  $ stra    : num  3.38 2.75 3.62 3.12 3.62 ...
##  $ surf    : num  2.58 3.17 2.25 2.25 2.83 ...
##  $ Points  : int  25 12 24 10 22 21 21 31 24 26 ...
dim(data_analysis) # The data contains 166 observations (rows) and 7 variables (columns).
## [1] 166   7
head(data_analysis, n = 3) # First three rows
##   gender Age attitude     deep  stra     surf Points
## 1      F  53      3.7 3.583333 3.375 2.583333     25
## 2      M  55      3.1 2.916667 2.750 3.166667     12
## 3      F  49      2.5 3.500000 3.625 2.250000     24
tail(data_analysis, n = 3) # Last three rows
##     gender Age attitude     deep  stra     surf Points
## 164      F  18      3.7 3.166667 2.625 3.416667     18
## 165      F  19      3.6 3.416667 2.625 3.000000     30
## 166      M  21      1.8 4.083333 3.375 2.666667     19
  • The attitude column is a summary of the Attitude column, which is the sum of 10 questions related to students’ attitudes toward statistics, each measured on a 1-5 scale.
  • The “deep” column summarizes the “D” measures, precisely D03, D11, D19, D27, D07, D14, D22, D30, D06, D15, D23, D31.
  • The “surf” column summarizes the “SU” measures, precisely SU02, SU10, SU18, SU26, SU05, SU13, SU21, SU29, SU08, SU16, SU24, SU32.
  • The “stra” column summarizes the “ST” measures, precisely ST01, ST09, ST17, ST25, ST04, ST12, ST20, ST28. A sketch of how such combination scores are typically built follows this list.
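
As a sketch of how such combination scores are typically built: each score is the mean of its question columns. The data frame name lrn14 and the question columns below are assumed names from the original (unfiltered) survey data, not objects created in this report.

# Hypothetical sketch: 'lrn14' and the D-question columns come from the
# original survey data, which is not loaded here.
deep_questions <- c("D03", "D11", "D19", "D27", "D07", "D14", "D22", "D30",
                    "D06", "D15", "D23", "D31")
lrn14$deep <- rowMeans(lrn14[, deep_questions]) # average over the 12 items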

A graphical overview of the data and summary of the variables

library(GGally) # Access the GGally library
library(ggplot2) # Access the ggplot2 library
# create plot matrix with ggpairs() by gender.
ggpairs(data_analysis, mapping = aes(col = gender, alpha = 0.3), lower = list(combo = wrap("facethist", bins = 20)))    

Interpretations:

  • There seem to be more female students than male students, as shown by the gender bar plot.
  • The median age of male students is a little higher than that of female students. On the other hand, the median exam points are roughly the same for males and females.
  • The “attitude” variable is positively correlated with the “Points” variable for both females and males. The same holds for the “stra” variable.
  • The rest of the variables are negatively correlated with the “Points” variable.
  • The “Age” variable is right-skewed (most students are young, with a long tail of older students). The other variables look roughly normally distributed, some with more than one mode.
summary(data_analysis) #  summary of the variables
##     gender               Age           attitude          deep      
##  Length:166         Min.   :17.00   Min.   :1.400   Min.   :1.583  
##  Class :character   1st Qu.:21.00   1st Qu.:2.600   1st Qu.:3.333  
##  Mode  :character   Median :22.00   Median :3.200   Median :3.667  
##                     Mean   :25.51   Mean   :3.143   Mean   :3.680  
##                     3rd Qu.:27.00   3rd Qu.:3.700   3rd Qu.:4.083  
##                     Max.   :55.00   Max.   :5.000   Max.   :4.917  
##       stra            surf           Points     
##  Min.   :1.250   Min.   :1.583   Min.   : 7.00  
##  1st Qu.:2.625   1st Qu.:2.417   1st Qu.:19.00  
##  Median :3.188   Median :2.833   Median :23.00  
##  Mean   :3.121   Mean   :2.787   Mean   :22.72  
##  3rd Qu.:3.625   3rd Qu.:3.167   3rd Qu.:27.75  
##  Max.   :5.000   Max.   :4.333   Max.   :33.00

Interpretations:

  • The students’ average age is about 25.5 years; the youngest student is 17 years old and the oldest is 55.
  • The average exam score is 22.72 points; the lowest score is 7 and the highest is 33.
  • The global attitude toward statistics is, on average, about 3.14 out of 5.

Fit a regression model

For the regression model, the three chosen explanatory variables are attitude, stra, and surf. The choice is based on the correlation analysis conducted in the step above: as can be observed, these three variables show the strongest correlations with the response variable Points.

# Fit a multiple linear regression model using the lm() function
model1 <- lm(Points ~ attitude + stra + surf,
             data = data_analysis)
# Summary of the fitted model
summary(model1)
## 
## Call:
## lm(formula = Points ~ attitude + stra + surf, data = data_analysis)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -17.1550  -3.4346   0.5156   3.6401  10.8952 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  11.0171     3.6837   2.991  0.00322 ** 
## attitude      3.3952     0.5741   5.913 1.93e-08 ***
## stra          0.8531     0.5416   1.575  0.11716    
## surf         -0.5861     0.8014  -0.731  0.46563    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.296 on 162 degrees of freedom
## Multiple R-squared:  0.2074, Adjusted R-squared:  0.1927 
## F-statistic: 14.13 on 3 and 162 DF,  p-value: 3.156e-08

Interpretations: Neither the stra nor the surf variable is statistically significant, as their p-values are higher than 0.05, the commonly used 5% significance level. Let us remove these variables one at a time, starting with surf, and refit the model.

# Fit a new multiple linear regression model without the surf variable
model2 <- lm(Points ~ attitude + stra,
             data = data_analysis)
# Summary of the new fitted model
summary(model2)
## 
## Call:
## lm(formula = Points ~ attitude + stra, data = data_analysis)
## 
## Residuals:
##      Min       1Q   Median       3Q      Max 
## -17.6436  -3.3113   0.5575   3.7928  10.9295 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   8.9729     2.3959   3.745  0.00025 ***
## attitude      3.4658     0.5652   6.132 6.31e-09 ***
## stra          0.9137     0.5345   1.709  0.08927 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 5.289 on 163 degrees of freedom
## Multiple R-squared:  0.2048, Adjusted R-squared:  0.1951 
## F-statistic: 20.99 on 2 and 163 DF,  p-value: 7.734e-09

Interpretations: In the newly fitted model, the remaining variables are statistically significant, at least at the 10% significance level in the case of the stra variable.

Summary and model interpretations

In linear regression, the interpretation of the model depends on the functional form used. In this case, the functional form is linear-linear (level-level), which means the data were not log-transformed.

Therefore, the interpretation goes as follows: “a one-unit increase in x (the explanatory variable) results in a beta_1 (the estimated parameter) unit increase in y (the response variable)”. Additionally, as this is a multiple linear regression, one must hold the other factors fixed and interpret one variable at a time.

Interpretations:

  • Holding the other factors fixed, a one-unit increase in the global attitude toward statistics results in a 3.47-point increase in exam points.
  • Holding the other factors fixed, a one-unit increase in stra results in a 0.91-point increase in exam points. Recall that the stra variable is a summary of several measures (see the sketch below).
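
As a quick sanity check of this interpretation, predict() confirms that raising attitude by one unit while holding stra fixed moves the prediction by exactly the attitude coefficient. A minimal sketch using model2 and two hypothetical students:

# Two hypothetical students who differ only in attitude
new_students <- data.frame(attitude = c(3, 4), stra = c(3, 3))
preds <- predict(model2, newdata = new_students)
diff(preds) # equals the attitude coefficient, about 3.47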

The R-squared quantifies how well the model fits the data. In simple linear regression, it is the explained sum of squares divided by the total sum of squares. In multiple linear regression, it is recommended to assess model fit with the adjusted R-squared instead, which takes into account the number of fitted parameters.

Interpretation: In the case of model2, the adjusted R-squared is 0.1951, meaning the model explains about 19.5% of the variation in exam points. The higher the adjusted R-squared, the better the fit.
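
The relationship between the two quantities can be verified by hand. A minimal sketch, where n is the number of observations and p the number of explanatory variables:

# Hand-compute R-squared and adjusted R-squared for model2
rss <- sum(residuals(model2)^2) # residual sum of squares
tss <- sum((data_analysis$Points - mean(data_analysis$Points))^2) # total sum of squares
r2 <- 1 - rss / tss
n <- nrow(data_analysis)
p <- 2
adj_r2 <- 1 - (1 - r2) * (n - 1) / (n - p - 1)
c(r2 = r2, adj_r2 = adj_r2) # should match 0.2048 and 0.1951 from summary(model2)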

Diagnostic plots

# Plot the diagnostics plots: Residuals vs Fitted values, Normal QQ-plot and Residuals vs Leverage
par(mfrow = c(2,2)) # Divide the plotting window into a 2-by-2 grid of sub-windows.
plot(model2, which = c(1,2,5))

  • The Q-Q plot: The plot is used to diagnose the normality assumption of the errors in the linear regression model. If most of the points lie on the line, the assumption is justified.

Interpretation: most of the residual points seem to lie on the line; the normality assumption is justified.

  • The Residuals vs Fitted values plot: The plot is used to check the constant variance (homoscedasticity) assumption. There should be no pattern in the plot, which would indicate that the size of the errors does not depend on the explanatory variables.

Interpretation: The residuals seem to be randomly scattered with respect to the fitted values; no pattern is observed, so the assumption is justified.

  • The Residuals vs Leverage plot: The plot is used to investigate whether there are outlying observations that could influence the outcome of the model.

Interpretation: No influential outlier stands out.


Logistic regression

This chapter focuses on performing and interpreting logistic regression analysis.

Read the joined student alcohol consumption data

The data are from two identical questionnaires related to secondary school student alcohol consumption in Portugal. Data source: UCI Machine Learning Repository. Metadata available here.

data_joined <- read.csv("pormath.csv") # Read data from my local folder
str(data_joined) # The data structure is a data frame.
## 'data.frame':    370 obs. of  35 variables:
##  $ school    : chr  "GP" "GP" "GP" "GP" ...
##  $ sex       : chr  "F" "F" "F" "F" ...
##  $ age       : int  15 15 15 15 15 15 15 15 15 15 ...
##  $ address   : chr  "R" "R" "R" "R" ...
##  $ famsize   : chr  "GT3" "GT3" "GT3" "GT3" ...
##  $ Pstatus   : chr  "T" "T" "T" "T" ...
##  $ Medu      : int  1 1 2 2 3 3 3 2 3 3 ...
##  $ Fedu      : int  1 1 2 4 3 4 4 2 1 3 ...
##  $ Mjob      : chr  "at_home" "other" "at_home" "services" ...
##  $ Fjob      : chr  "other" "other" "other" "health" ...
##  $ reason    : chr  "home" "reputation" "reputation" "course" ...
##  $ nursery   : chr  "yes" "no" "yes" "yes" ...
##  $ internet  : chr  "yes" "yes" "no" "yes" ...
##  $ alc_use   : num  1 3 1 1 2.5 1 2 2 2.5 1 ...
##  $ high_use  : logi  FALSE TRUE FALSE FALSE TRUE FALSE ...
##  $ guardian  : chr  "mother" "mother" "mother" "mother" ...
##  $ traveltime: int  2 1 1 1 2 1 2 2 2 1 ...
##  $ studytime : int  4 2 1 3 3 3 3 2 4 4 ...
##  $ schoolsup : chr  "yes" "yes" "yes" "yes" ...
##  $ famsup    : chr  "yes" "yes" "yes" "yes" ...
##  $ activities: chr  "yes" "no" "yes" "yes" ...
##  $ higher    : chr  "yes" "yes" "yes" "yes" ...
##  $ romantic  : chr  "no" "yes" "no" "no" ...
##  $ famrel    : int  3 3 4 4 4 4 4 4 4 4 ...
##  $ freetime  : int  1 3 3 3 2 3 2 1 4 3 ...
##  $ health    : int  1 5 2 5 3 5 5 4 3 4 ...
##  $ goout     : int  2 4 1 2 1 2 2 3 2 3 ...
##  $ Walc      : int  1 4 1 1 3 1 2 3 3 1 ...
##  $ Dalc      : int  1 2 1 1 2 1 2 1 2 1 ...
##  $ failures  : int  0 1 0 0 1 0 1 0 0 0 ...
##  $ paid      : chr  "yes" "no" "no" "no" ...
##  $ absences  : int  3 2 8 2 5 2 0 1 9 10 ...
##  $ G1        : int  10 10 14 10 12 12 11 10 16 10 ...
##  $ G2        : int  12 8 13 10 12 12 6 10 16 10 ...
##  $ G3        : int  12 8 12 9 12 12 6 10 16 10 ...
dim(data_joined) # The data contains 370 observations (rows) and 35 variables (columns).
## [1] 370  35
colnames(data_joined) # variable names
##  [1] "school"     "sex"        "age"        "address"    "famsize"   
##  [6] "Pstatus"    "Medu"       "Fedu"       "Mjob"       "Fjob"      
## [11] "reason"     "nursery"    "internet"   "alc_use"    "high_use"  
## [16] "guardian"   "traveltime" "studytime"  "schoolsup"  "famsup"    
## [21] "activities" "higher"     "romantic"   "famrel"     "freetime"  
## [26] "health"     "goout"      "Walc"       "Dalc"       "failures"  
## [31] "paid"       "absences"   "G1"         "G2"         "G3"

Four interesting variables in the data

The chosen variables are: student grades (G3), sex, absences, and failures.

The hypotheses are:

  • H1: Higher alcohol use is associated with lower grades.
  • H2: Higher alcohol use is associated with sex.
  • H3: Higher alcohol use is associated with more student absences.
  • H4: Higher alcohol use is associated with more student failures.

Explore the distributions of the chosen variables

Student grades (G3) and sex in relation to alcohol consumption

library(dplyr) # Access the dplyr library
library(ggplot2) # Access the ggplot2 library
data_joined %>%   # produce summary statistics by group
  group_by(sex, high_use) %>%
  summarise(count = n(), mean_grade = mean(G3))
## # A tibble: 4 x 4
## # Groups:   sex [2]
##   sex   high_use count mean_grade
##   <chr> <lgl>    <int>      <dbl>
## 1 F     FALSE      154       11.4
## 2 F     TRUE        41       11.8
## 3 M     FALSE      105       12.3
## 4 M     TRUE        70       10.3
ggplot(data_joined, aes(x = high_use, y = G3, col = sex)) + geom_boxplot() + ylab("grade") + ggtitle("Student grades by alcohol consumption and sex")

Comments:

  • 41 female students out of 195 (i.e., 21%) are high alcohol users.
  • 70 male students out of 175 (i.e., 40%) are high alcohol users. Based on these two observations, one can argue that H2 is supported.
  • On average, male high alcohol users tend to have lower grades (about 2 points less than non-high users).
  • For females, the average grade is roughly the same for high and non-high users.
  • Looking at the boxplot, students who are not high alcohol users have higher grades; among them, the median grade of males is higher than that of females.
  • High alcohol users tend to have lower grades, so one can argue that H1 is supported.
  • Among high alcohol users, females tend to have a higher median grade than males.

Student absences in relation to alcohol consumption

ggplot(data_joined, aes(x = high_use, y = absences, col = sex)) + geom_boxplot() + ggtitle("Student absences by alcohol consumption and sex")

Comments:

  • High alcohol users are absent more often than non-high users (H3 supported), and within this category male students are absent slightly more often than females.
  • Among students who are not high alcohol users, females are absent more often than males; however, the median numbers of absence days are roughly the same for males and females.

Student failures in relation to alcohol consumption

ggplot(data_joined, aes(failures, fill = high_use)) + geom_bar(position = "dodge") + ggtitle("Student failures by alcohol consumption")

Comments:

  • For students with no failures (“0”), the count is clearly higher among non-high alcohol users, which on its own does not support H4.
  • For students with “1” or “2” failures, the counts are roughly the same for high and non-high alcohol users.
  • For students with “3” failures, slightly more are high alcohol users, which hints at some support for H4.

Logistic regression analysis

This analysis explores the relationship between the four chosen variables and the binary high/low alcohol consumption variable as the target variable.

# Fit a logistic regression model using the glm() function
glm_model1 <- glm(high_use ~ G3 + failures + absences + sex, data = data_joined, family = "binomial")
# Summary of the fitted model
summary(glm_model1)
## 
## Call:
## glm(formula = high_use ~ G3 + failures + absences + sex, family = "binomial", 
##     data = data_joined)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.1561  -0.8429  -0.5872   1.0033   2.1393  
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -1.38733    0.51617  -2.688  0.00719 ** 
## G3          -0.04671    0.03948  -1.183  0.23671    
## failures     0.50382    0.22018   2.288  0.02213 *  
## absences     0.09058    0.02322   3.901 9.56e-05 ***
## sexM         1.00870    0.24798   4.068 4.75e-05 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 452.04  on 369  degrees of freedom
## Residual deviance: 405.59  on 365  degrees of freedom
## AIC: 415.59
## 
## Number of Fisher Scoring iterations: 4
# Coefficients of the model
coef(glm_model1)
## (Intercept)          G3    failures    absences        sexM 
## -1.38732604 -0.04670941  0.50382370  0.09057516  1.00870144

Interpretations:

  • The student grades (G3) variable is not statistically significant; thus, it will be removed from the model. Hence, H1 is not supported by the model.
  • As the sex variable is a factor, the “female” level is the baseline, and its effect is absorbed into the intercept.
  • The intercept for “male” students is therefore -1.38733 + 1.00870 = -0.37863 (see the sketch below).
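
Both points can be verified from the fitted model. A minimal sketch using the coefficients of glm_model1:

# The "male intercept" is the baseline intercept plus the sexM contrast
unname(coef(glm_model1)["(Intercept)"] + coef(glm_model1)["sexM"]) # about -0.379
# Exponentiating the sexM coefficient gives the male-vs-female odds ratio
exp(coef(glm_model1)["sexM"]) # about 2.74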

Let us fit the model without the G3 variable and without an intercept, so that the coefficients of both sex levels are shown directly.

# Fit a new logistic regression model 
glm_model2 <- glm(high_use ~ failures + absences + sex - 1, data = data_joined, family = "binomial")
# Summary of the fitted model
summary(glm_model2)
## 
## Call:
## glm(formula = high_use ~ failures + absences + sex - 1, family = "binomial", 
##     data = data_joined)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.1550  -0.8430  -0.5889   1.0328   2.0374  
## 
## Coefficients:
##          Estimate Std. Error z value Pr(>|z|)    
## failures  0.59759    0.20698   2.887  0.00389 ** 
## absences  0.09245    0.02323   3.979 6.91e-05 ***
## sexF     -1.94150    0.23129  -8.394  < 2e-16 ***
## sexM     -0.94418    0.19305  -4.891 1.00e-06 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 512.93  on 370  degrees of freedom
## Residual deviance: 406.99  on 366  degrees of freedom
## AIC: 414.99
## 
## Number of Fisher Scoring iterations: 4
# Coefficients of the model
coef(glm_model2)
##    failures    absences        sexF        sexM 
##  0.59759293  0.09245138 -1.94149584 -0.94418265
# compute odds ratios (OR)
OR <- coef(glm_model2) %>% exp

# compute confidence intervals (CI)
CI <- confint(glm_model2) %>% exp

# print out the odds ratios with their confidence intervals
cbind(OR, CI)
##                 OR      2.5 %    97.5 %
## failures 1.8177381 1.21997630 2.7642305
## absences 1.0968598 1.05041937 1.1508026
## sexF     0.1434892 0.08940913 0.2219242
## sexM     0.3889974 0.26411123 0.5638009

Interpretations:

  • All coefficients are statistically significant: all variables are highly significant predictors of the probability of high alcohol consumption among students.
  • The model has also improved in terms of information criteria: model1 had an AIC of 415.59, while model2 has an AIC of 414.99; thus, model2 is the preferable one.
  • The odds ratio for failures is estimated at 1.817 with a 95% CI of [1.219, 2.764]. Note that the interval does not contain 1: the increase in the odds of being a high alcohol user associated with a one-unit increase in failures is estimated to be between 22% and 176%.
  • The odds ratio for absences is estimated at 1.096 with a 95% CI of [1.050, 1.150]. Again the interval does not contain 1: the increase in the odds of being a high alcohol user associated with a one-unit increase in absences is estimated to be between 5% and 15%.
  • Since the model has no intercept, the sex coefficients act as group-specific baselines: the odds of a male student (with zero failures and absences) being a high alcohol user are estimated at 0.389 with a 95% CI of [0.264, 0.564].
  • The corresponding odds for a female student are estimated at 0.143 with a 95% CI of [0.089, 0.222], clearly lower than for males.

Explore the predictive power of the model

# predict() the probability of high_use
probabilities <- predict(glm_model2, type = "response")

# add the predicted probabilities to 'data_joined'
data_joined <- mutate(data_joined, probability = probabilities)

# use the probabilities to make a prediction of high_use
data_joined <- mutate(data_joined, prediction = probability > 0.5)

# see the first ten original classes, predicted probabilities, and class predictions
select(data_joined, failures, absences, sex, high_use, probability, prediction) %>% head(10)
##    failures absences sex high_use probability prediction
## 1         0        3   F    FALSE   0.1592068      FALSE
## 2         1        2   F     TRUE   0.2388490      FALSE
## 3         0        8   F    FALSE   0.2311401      FALSE
## 4         0        2   F    FALSE   0.1472175      FALSE
## 5         1        5   F     TRUE   0.2928368      FALSE
## 6         0        2   F    FALSE   0.1472175      FALSE
## 7         1        0   F    FALSE   0.2068690      FALSE
## 8         0        1   F    FALSE   0.1359851      FALSE
## 9         0        9   F     TRUE   0.2479765      FALSE
## 10        0       10   F    FALSE   0.2656157      FALSE
# tabulate the target variable versus the predictions
table(high_use = data_joined$high_use, 
       prediction = data_joined$prediction)
##         prediction
## high_use FALSE TRUE
##    FALSE   252    7
##    TRUE     78   33

Comments:

  • There are 252 true negatives and 33 true positives, i.e., the model correctly classified 252 of the 259 actual FALSEs and 33 of the 111 actual TRUEs.
  • There are 7 false positives and 78 false negatives, i.e., the model wrongly flagged 7 of the 259 FALSEs and missed 78 of the 111 TRUEs (see the sketch below).
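
These proportions can be computed directly from the confusion table. A minimal sketch:

# Simple classification metrics from the 2x2 confusion table
conf <- table(high_use = data_joined$high_use,
              prediction = data_joined$prediction)
accuracy    <- sum(diag(conf)) / sum(conf)                   # (252 + 33) / 370, about 0.77
sensitivity <- conf["TRUE", "TRUE"] / sum(conf["TRUE", ])    # 33 / 111, about 0.30
specificity <- conf["FALSE", "FALSE"] / sum(conf["FALSE", ]) # 252 / 259, about 0.97
round(c(accuracy = accuracy, sensitivity = sensitivity, specificity = specificity), 3)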

Let us check the accuracy and the loss function of the model

# define a loss function (mean prediction error)
loss_func <- function(class, prob) {
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}

# call loss_func to compute the average number of wrong predictions in the (training) data
loss_func(class = data_joined$high_use, prob = data_joined$probability)
## [1] 0.2297297

Comments: The mean of incorrectly classified observations can be thought of as a penalty (loss) function for the classifier: the smaller the penalty, the better. The aim is to minimize the number of incorrect classifications. Model2 has a mean prediction error of about 23% on the training data.

Bonus: Perform 10-fold cross-validation on the model

library(boot)
cv <- cv.glm(data = data_joined, cost = loss_func, glmfit = glm_model2, K = 10)

# average number of wrong predictions in the cross validation
cv$delta[1]
## [1] 0.2351351

Comments: Yes, model2 has a smaller prediction error (about 0.235 in cross-validation) compared to the model introduced in DataCamp (0.26 error).


Clustering and classification

This chapter focuses on performing and interpreting clustering and classification on the Boston data set.

Load the Boston data set

The data concern housing values in the suburbs of Boston. They are available in the MASS package, and the variable descriptions can be found here.

library(MASS)
data("Boston") # load the data
str(Boston) # A data frame
## 'data.frame':    506 obs. of  14 variables:
##  $ crim   : num  0.00632 0.02731 0.02729 0.03237 0.06905 ...
##  $ zn     : num  18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
##  $ indus  : num  2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
##  $ chas   : int  0 0 0 0 0 0 0 0 0 0 ...
##  $ nox    : num  0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
##  $ rm     : num  6.58 6.42 7.18 7 7.15 ...
##  $ age    : num  65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
##  $ dis    : num  4.09 4.97 4.97 6.06 6.06 ...
##  $ rad    : int  1 2 2 3 3 3 5 5 5 5 ...
##  $ tax    : num  296 242 242 222 222 222 311 311 311 311 ...
##  $ ptratio: num  15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
##  $ black  : num  397 397 393 395 397 ...
##  $ lstat  : num  4.98 9.14 4.03 2.94 5.33 ...
##  $ medv   : num  24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
dim(Boston) 
## [1] 506  14

Comment: The data frame has 506 rows and 14 columns. All variables are numeric; chas is the Charles River dummy variable (= 1 if the tract bounds the river, 0 otherwise).

A graphical overview of the data and summary of the variables

Matrix plot of the variables

# plot matrix of the variables
pairs(Boston,
      col = "blue", # Change color
      pch = 18,    # Change shape of points
      main = "Matrix plot of the variables") # Add a main title

The upper correlation matrix

library(corrplot) # access the corrplot library
# compute the correlation matrix
cor_matrix <- cor(Boston)
# visualize the upper correlation matrix
corrplot(cor_matrix, method = "circle", type = "upper")

Interpretations:

  • There seems to be a positive correlation between the per capita crime rate by town (crim) and both the index of accessibility to radial highways (rad) and the full-value property-tax rate per $10,000 (tax).

  • A slight positive correlation between the per capita crime rate by town (crim) and the proportion of non-retail business acres per town (indus), the nitrogen oxides concentration in parts per 10 million (nox), and the lower status of the population in percent (lstat).

  • A positive correlation between the proportion of residential land zoned for lots over 25,000 square feet (zn) and the weighted mean of distances to five Boston employment centres (dis).

  • A positive correlation between the proportion of non-retail business acres per town (indus) and the nitrogen oxides concentration (nox), the proportion of owner-occupied units built prior to 1940 (age), the index of accessibility to radial highways (rad), the full-value property-tax rate per $10,000 (tax), and the lower status of the population (lstat).

  • A positive correlation between the average number of rooms per dwelling (rm) and the median value of owner-occupied homes in $1000s (medv).

  • A negative correlation between the lower status of the population (lstat) and the median value of owner-occupied homes in $1000s (medv).

  • Moreover, three variables are negatively correlated with the weighted mean of distances to five Boston employment centres (dis): the proportion of owner-occupied units built prior to 1940 (age), the nitrogen oxides concentration (nox), and the proportion of non-retail business acres per town (indus).

Summary of the variables

summary(Boston) 
##       crim                zn             indus            chas        
##  Min.   : 0.00632   Min.   :  0.00   Min.   : 0.46   Min.   :0.00000  
##  1st Qu.: 0.08205   1st Qu.:  0.00   1st Qu.: 5.19   1st Qu.:0.00000  
##  Median : 0.25651   Median :  0.00   Median : 9.69   Median :0.00000  
##  Mean   : 3.61352   Mean   : 11.36   Mean   :11.14   Mean   :0.06917  
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000  
##  Max.   :88.97620   Max.   :100.00   Max.   :27.74   Max.   :1.00000  
##       nox               rm             age              dis        
##  Min.   :0.3850   Min.   :3.561   Min.   :  2.90   Min.   : 1.130  
##  1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02   1st Qu.: 2.100  
##  Median :0.5380   Median :6.208   Median : 77.50   Median : 3.207  
##  Mean   :0.5547   Mean   :6.285   Mean   : 68.57   Mean   : 3.795  
##  3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08   3rd Qu.: 5.188  
##  Max.   :0.8710   Max.   :8.780   Max.   :100.00   Max.   :12.127  
##       rad              tax           ptratio          black       
##  Min.   : 1.000   Min.   :187.0   Min.   :12.60   Min.   :  0.32  
##  1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38  
##  Median : 5.000   Median :330.0   Median :19.05   Median :391.44  
##  Mean   : 9.549   Mean   :408.2   Mean   :18.46   Mean   :356.67  
##  3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23  
##  Max.   :24.000   Max.   :711.0   Max.   :22.00   Max.   :396.90  
##      lstat            medv      
##  Min.   : 1.73   Min.   : 5.00  
##  1st Qu.: 6.95   1st Qu.:17.02  
##  Median :11.36   Median :21.20  
##  Mean   :12.65   Mean   :22.53  
##  3rd Qu.:16.95   3rd Qu.:25.00  
##  Max.   :37.97   Max.   :50.00

Interpretations: On average,

  • The per capita crime rate by town is about 3.61.
  • The proportion of residential land zoned for lots over 25,000 square feet is about 11.36.
  • The proportion of non-retail business acres per town is about 11.14.
  • The nitrogen oxides concentration (parts per 10 million) is about 0.55.
  • The average number of rooms per dwelling is about 6.
  • The proportion of owner-occupied units built prior to 1940 is about 68.57.
  • The full-value property-tax rate per $10,000 is about 408.2.
  • The pupil-teacher ratio by town is about 18.46.
  • The median value of owner-occupied homes in $1000s is about 22.53.
  • The mean of the Charles River dummy variable (chas, = 1 if the tract bounds the river, 0 otherwise) is about 0.069, i.e., about 7% of the tracts bound the river.

Standardize the dataset

# center and standardize variables
boston_scaled <- scale(Boston)

# change the object to data frame
boston_scaled <- as.data.frame(boston_scaled)

# summaries of the scaled variables
summary(boston_scaled)
##       crim                 zn               indus              chas        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563   Min.   :-0.2723  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668   1st Qu.:-0.2723  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109   Median :-0.2723  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150   3rd Qu.:-0.2723  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202   Max.   : 3.6648  
##       nox                rm               age               dis         
##  Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331   Min.   :-1.2658  
##  1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366   1st Qu.:-0.8049  
##  Median :-0.1441   Median :-0.1084   Median : 0.3171   Median :-0.2790  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059   3rd Qu.: 0.6617  
##  Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164   Max.   : 3.9566  
##       rad               tax             ptratio            black        
##  Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047   Min.   :-3.9033  
##  1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876   1st Qu.: 0.2049  
##  Median :-0.5225   Median :-0.4642   Median : 0.2746   Median : 0.3808  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058   3rd Qu.: 0.4332  
##  Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372   Max.   : 0.4406  
##      lstat              medv        
##  Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 3.5453   Max.   : 2.9865

How did the variables change?: The variables have been rescaled to have a mean of zero and a standard deviation of one. For a standardized variable, each case’s value indicates its difference from the mean of the original variable, expressed in standard deviations of the original variable, as verified below.
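
This can be verified directly, for example on the rad variable. A minimal sketch:

# Each scaled variable has mean 0 and standard deviation 1 ...
mean(boston_scaled$rad) # effectively 0 (up to floating point error)
sd(boston_scaled$rad)   # exactly 1
# ... and equals (x - mean(x)) / sd(x) computed from the original variable
all.equal(boston_scaled$rad, (Boston$rad - mean(Boston$rad)) / sd(Boston$rad))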

# Create a categorical variable of the crime rate

# create a quantile vector of crim
bins <- quantile(boston_scaled$crim)

# create a categorical variable 'crime'
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, labels = c("low", "med_low", "med_high", "high"))

# look at the table of the new factor crime
table(crime)
## crime
##      low  med_low med_high     high 
##      127      126      126      127
# Drop the old crime rate variable from the dataset
boston_scaled <- dplyr::select(boston_scaled, -crim)

# add the new categorical value to scaled data
boston_scaled <- data.frame(boston_scaled, crime)

# Divide the dataset to train and test sets

# number of rows in the Boston dataset 
n <- nrow(boston_scaled)

# choose randomly 80% of the rows
ind <- sample(n,  size = n * 0.8)

# create train set
train <- boston_scaled[ind,]

# create test set 
test <- boston_scaled[-ind,]

# save the correct classes from test data
correct_classes <- test$crime

# remove the crime variable from test data
test <- dplyr::select(test, -crime)

Fit the linear discriminant analysis on the train set

# linear discriminant analysis
lda.fit <- lda(crime ~ ., data = train)

# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}

# target classes as numeric
classes <- as.numeric(train$crime)

# plot the lda results
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 1.3)

Prediction of the classes with the LDA model on the test data

# Save the crime categories from the test set and then remove the categorical crime variable from the test dataset. DONE -- See above steps.

# predict classes with test data
lda.pred <- predict(lda.fit, newdata = test)

# cross tabulate the results
tbl_lda <- table(correct = correct_classes, predicted = lda.pred$class)
tbl_lda; rowSums(tbl_lda)
##           predicted
## correct    low med_low med_high high
##   low       10      10        4    0
##   med_low   10       9        7    0
##   med_high   2       6       14    2
##   high       0       0        0   28
##      low  med_low med_high     high 
##       24       26       24       28

Comments: From the table, we see that the LDA predicts correctly:

  • 10 out of 24 (i.e., 41.7%) of the low observations.
  • 9 out of 26 (i.e., 34.6%) of the med_low observations.
  • 14 out of 24 (i.e., 58.3%) of the med_high observations.
  • All 28 (i.e., 100%) of the high observations. The overall accuracy is computed below.
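
The overall test-set accuracy follows directly from the diagonal of the cross table. A minimal sketch:

# Proportion of correctly classified test observations
sum(diag(tbl_lda)) / sum(tbl_lda) # (10 + 9 + 14 + 28) / 102, about 0.60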

Reload the data set, standardize it and run the k-means algorithm

data("Boston") # Reload
Re_data <- scale(Boston) # standardize it

# distances between the observations
dist_eu <- dist(Re_data)

# k-means clustering with 3 clusters
km <- kmeans(Re_data, centers = 3)

# investigate what is the optimal number of clusters 

set.seed(123) # set the seed

# determine the number of clusters
k_max <- 10

# calculate the total within sum of squares
twcss <- sapply(1:k_max, function(k){kmeans(Re_data, k)$tot.withinss})

# visualize the results
library(ggplot2)
qplot(x = 1:k_max, y = twcss, geom = 'line')

Comment: The “scree plot” above helps to identify an appropriate number of clusters. The “elbow” shape suggests that two clusters (k = 2) are the best candidate, since the total WCSS drops most sharply up to that point.

# run the algorithm again

# k-means clustering
km_new <- kmeans(Re_data, centers = 2)

# plot the scaled Boston data set coloured by the 2 clusters
pairs(Re_data, col = km_new$cluster)

table(km_new$cluster) 
## 
##   1   2 
## 329 177

Comment: With k = 2, cluster 1 contains 329 of the 506 observations and cluster 2 the remaining 177. The pairs plot also separates the clusters by colour within the predictors: some variable pairs show a clear cut between the clusters, while others are quite mixed.

Bonus

model_predictors <- dplyr::select(train, -crime)
# check the dimensions
dim(model_predictors)
## [1] 404  13
dim(lda.fit$scaling)
## [1] 13  3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)

library(plotly) # access plotly library

# 3D plot of the columns of the matrix 
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type = 'scatter3d', mode = 'markers', color = train$crime)
# another 3D plot with colour defined by k-means clusters; the clusters are
# recomputed on the train-set predictors so their length matches the points
km_train <- kmeans(model_predictors, centers = 2)
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type = 'scatter3d', mode = 'markers', color = factor(km_train$cluster))

How do the plots differ? Are there any similarities?: The two plots look relatively similar, with a clear cut between clusters, and the number of observations within each group seems to be about the same.


Dimensionality reduction techniques

This chapter focuses on dimensionality reduction techniques such as principal component analysis (PCA) and multiple correspondence analysis (MCA).

Read the human data

human <- read.csv("human_data") # Read the data from my local file
dim(human) 
## [1] 155   9
str(human) 
## 'data.frame':    155 obs. of  9 variables:
##  $ X                 : chr  "Norway" "Australia" "Switzerland" "Denmark" ...
##  $ F_education       : num  97.4 94.3 95 95.5 87.7 96.3 80.5 95.1 100 95 ...
##  $ ratio_labour      : num  0.891 0.819 0.825 0.884 0.829 ...
##  $ Life_Expectancy   : num  81.6 82.4 83 80.2 81.6 80.9 80.9 79.1 82 81.8 ...
##  $ Years_Education   : num  17.5 20.2 15.8 18.7 17.9 16.5 18.6 16.5 15.9 19.2 ...
##  $ GNI_per_capita    : int  64992 42261 56431 44025 45435 43919 39568 52947 42155 32689 ...
##  $ Maternal_mortality: int  4 6 6 5 6 7 9 28 11 8 ...
##  $ Adolescent_birth  : num  7.8 12.1 1.9 5.1 6.2 3.8 8.2 31 14.5 25.3 ...
##  $ In_parliament     : num  39.6 30.5 28.5 38 36.9 36.9 19.9 19.4 28.2 31.4 ...

A graphical overview of the data and summary of the variables

library(GGally) # Access the GGally library
library(dplyr) # Access the dplyr library
library(corrplot) # Access the corrplot library

# Remove the country name column (X)
human_new <- dplyr::select(human, -X)

# visualize the 'human' variables
ggpairs(human_new, mapping = aes(alpha = 0.3))

# compute the correlation matrix and visualize it with corrplot
cor(human_new)%>%
corrplot(type = "upper")

Interpretations:

  • There seems to be a positive correlation between the proportion of females with at least secondary education (F_education) and both life expectancy at birth (Life_Expectancy) and expected years of schooling (Years_Education).
  • There seems to be a slight positive correlation between F_education and Gross National Income (GNI) per capita (GNI_per_capita).
  • There seems to be a positive correlation between Life_Expectancy and Years_Education, and a slight one between Life_Expectancy and GNI_per_capita.
  • There seems to be a slight positive correlation between Years_Education and GNI_per_capita.
  • A positive correlation between the maternal mortality ratio (Maternal_mortality) and the adolescent birth rate (Adolescent_birth).
  • A negative correlation is observed between Years_Education and both Maternal_mortality and Adolescent_birth; between Life_Expectancy and both Maternal_mortality and Adolescent_birth; and between F_education and both Maternal_mortality and Adolescent_birth.
  • The Years_Education and percentage of female representatives in parliament (In_parliament) variables seem to be roughly normally distributed.
  • The ratio of female to male labour-force participation (ratio_labour) and Life_Expectancy appear left-skewed, with longer left tails.
  • GNI_per_capita, Maternal_mortality, and Adolescent_birth appear right-skewed, with longer right tails.
summary(human_new) # summary of variables
##   F_education      ratio_labour    Life_Expectancy Years_Education
##  Min.   :  0.90   Min.   :0.1857   Min.   :49.00   Min.   : 5.40  
##  1st Qu.: 27.15   1st Qu.:0.5984   1st Qu.:66.30   1st Qu.:11.25  
##  Median : 56.60   Median :0.7535   Median :74.20   Median :13.50  
##  Mean   : 55.37   Mean   :0.7074   Mean   :71.65   Mean   :13.18  
##  3rd Qu.: 85.15   3rd Qu.:0.8535   3rd Qu.:77.25   3rd Qu.:15.20  
##  Max.   :100.00   Max.   :1.0380   Max.   :83.50   Max.   :20.20  
##  GNI_per_capita   Maternal_mortality Adolescent_birth In_parliament  
##  Min.   :   581   Min.   :   1.0     Min.   :  0.60   Min.   : 0.00  
##  1st Qu.:  4198   1st Qu.:  11.5     1st Qu.: 12.65   1st Qu.:12.40  
##  Median : 12040   Median :  49.0     Median : 33.60   Median :19.30  
##  Mean   : 17628   Mean   : 149.1     Mean   : 47.16   Mean   :20.91  
##  3rd Qu.: 24512   3rd Qu.: 190.0     3rd Qu.: 71.95   3rd Qu.:27.95  
##  Max.   :123124   Max.   :1100.0     Max.   :204.80   Max.   :57.50

Interpretations: On average,

  • The proportion of females with at least secondary education is about 55.37.
  • The ratio of female to male labour-force participation (ratio_labour) is about 0.71.
  • Life expectancy at birth is approximately 72 years.
  • The expected years of schooling is approximately 13 years.
  • The Gross National Income (GNI) per Capita is about 17628.
  • The Maternal Mortality Ratio is about 149.1.
  • The Adolescent Birth Rate is about 47.2.
  • The Percentage of female representatives in parliament is about 20.91.

Perform principal component analysis (PCA) – Non-standardized data

# perform principal component analysis
pca_human_new <- prcomp(human_new)
summary(pca_human_new)
## Importance of components:
##                              PC1      PC2   PC3   PC4   PC5   PC6   PC7    PC8
## Standard deviation     1.854e+04 186.1920 25.97 19.25 11.42 3.723 1.431 0.1649
## Proportion of Variance 9.999e-01   0.0001  0.00  0.00  0.00 0.000 0.000 0.0000
## Cumulative Proportion  9.999e-01   1.0000  1.00  1.00  1.00 1.000 1.000 1.0000
# draw a biplot of the principal component 
biplot(pca_human_new, choices = 1:2, col = c("blue", "red"))
## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped

## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped

## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped

## Warning in arrows(0, 0, y[, 1L] * 0.8, y[, 2L] * 0.8, col = col[2L], length =
## arrow.len): zero-length arrow is of indeterminate angle and so skipped
Figure 1: PCA – non-standardized data

Perform principal component analysis (PCA) – Standardized data

# standardize the variables
human_std <- scale(human_new)

# perform principal component analysis
pca_human_std <- prcomp(human_std)
summary(pca_human_std)
## Importance of components:
##                           PC1    PC2     PC3     PC4     PC5     PC6     PC7
## Standard deviation     2.1194 1.1478 0.89070 0.73763 0.55201 0.48552 0.44894
## Proportion of Variance 0.5615 0.1647 0.09917 0.06801 0.03809 0.02947 0.02519
## Cumulative Proportion  0.5615 0.7261 0.82532 0.89333 0.93142 0.96089 0.98608
##                            PC8
## Standard deviation     0.33372
## Proportion of Variance 0.01392
## Cumulative Proportion  1.00000
# draw a biplot of the principal component 
biplot(pca_human_std, choices = 1:2, col = c("grey40", "deeppink2"))
Figure 2: PCA – standardized data

Interpretations:

  • With and without standardizing, are the results different? Yes, the results are very different.

  • How and why? In Figure 1 (PCA on the non-standardized data), we cannot really grasp the variability captured by the principal components. This is because PCA is sensitive to the relative scaling of the original features and treats features with larger variance as more important than features with smaller variance. That is why GNI_per_capita, by far the largest-scaled feature, dominates the biplot with its long arrow, and why about 99% of the variation is attributed to the first principal component alone. The raw feature variances are compared in the sketch after this list.

  • Figure 2, on the other hand, shows why standardizing the features before PCA is a crucial step. PCA decomposes the data into components ordered by the amount of variance they capture and thereby reveals the most important features. As the cumulative proportions show, about 98.6% of the variation is explained by the first seven principal components, and the first principal component, which captures the maximum amount of variance in the original features, accounts for about 56%.
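
The scale problem is easy to see by comparing the raw feature variances. A minimal sketch; GNI_per_capita dwarfs every other variable, so it dominates the unstandardized PCA:

# Variances of the original (unstandardized) features
sapply(human_new, var)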

Personal interpretations of PCA

  • The dimensionality of the human data is reduced to two principal components (PCs). The first PC captures about 56% of the total variance, and the first two PCs together capture about 72.6% (cumulatively). The PCs are uncorrelated variables that capture the maximum amount of variation in the data.

From the biplot, we can observe the following connections:

  • The angle between arrows reflects the correlation between the features: a small angle means a high positive correlation. There is, for instance, a high positive correlation between ratio_labour and In_parliament, and between Maternal_mortality and Adolescent_birth.

  • The angle between a feature and a PC axis reflects the correlation between the two: a small angle means a high positive correlation. For instance, the following variables load strongly on PC1: Maternal_mortality, Adolescent_birth, Years_Education, GNI_per_capita, and F_education.

  • The length of an arrow is proportional to the standard deviation of the feature. All variables seem to have roughly the same standard deviation, except In_parliament, whose arrow is short. The loadings behind the arrows can be inspected directly, as sketched below.
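
The arrows in the biplot correspond to the loadings of the standardized features on the principal components, which can be inspected directly. A minimal sketch:

# Feature loadings on the first two principal components
round(pca_human_std$rotation[, 1:2], 2)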

Load the tea dataset for MCA analysis

library(FactoMineR)
library(ggplot2)
library(tidyr)

data(tea)
dim(tea)
## [1] 300  36
str(tea)
## 'data.frame':    300 obs. of  36 variables:
##  $ breakfast       : Factor w/ 2 levels "breakfast","Not.breakfast": 1 1 2 2 1 2 1 2 1 1 ...
##  $ tea.time        : Factor w/ 2 levels "Not.tea time",..: 1 1 2 1 1 1 2 2 2 1 ...
##  $ evening         : Factor w/ 2 levels "evening","Not.evening": 2 2 1 2 1 2 2 1 2 1 ...
##  $ lunch           : Factor w/ 2 levels "lunch","Not.lunch": 2 2 2 2 2 2 2 2 2 2 ...
##  $ dinner          : Factor w/ 2 levels "dinner","Not.dinner": 2 2 1 1 2 1 2 2 2 2 ...
##  $ always          : Factor w/ 2 levels "always","Not.always": 2 2 2 2 1 2 2 2 2 2 ...
##  $ home            : Factor w/ 2 levels "home","Not.home": 1 1 1 1 1 1 1 1 1 1 ...
##  $ work            : Factor w/ 2 levels "Not.work","work": 1 1 2 1 1 1 1 1 1 1 ...
##  $ tearoom         : Factor w/ 2 levels "Not.tearoom",..: 1 1 1 1 1 1 1 1 1 2 ...
##  $ friends         : Factor w/ 2 levels "friends","Not.friends": 2 2 1 2 2 2 1 2 2 2 ...
##  $ resto           : Factor w/ 2 levels "Not.resto","resto": 1 1 2 1 1 1 1 1 1 1 ...
##  $ pub             : Factor w/ 2 levels "Not.pub","pub": 1 1 1 1 1 1 1 1 1 1 ...
##  $ Tea             : Factor w/ 3 levels "black","Earl Grey",..: 1 1 2 2 2 2 2 1 2 1 ...
##  $ How             : Factor w/ 4 levels "alone","lemon",..: 1 3 1 1 1 1 1 3 3 1 ...
##  $ sugar           : Factor w/ 2 levels "No.sugar","sugar": 2 1 1 2 1 1 1 1 1 1 ...
##  $ how             : Factor w/ 3 levels "tea bag","tea bag+unpackaged",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ where           : Factor w/ 3 levels "chain store",..: 1 1 1 1 1 1 1 1 2 2 ...
##  $ price           : Factor w/ 6 levels "p_branded","p_cheap",..: 4 6 6 6 6 3 6 6 5 5 ...
##  $ age             : int  39 45 47 23 48 21 37 36 40 37 ...
##  $ sex             : Factor w/ 2 levels "F","M": 2 1 1 2 2 2 2 1 2 2 ...
##  $ SPC             : Factor w/ 7 levels "employee","middle",..: 2 2 4 6 1 6 5 2 5 5 ...
##  $ Sport           : Factor w/ 2 levels "Not.sportsman",..: 2 2 2 1 2 2 2 2 2 1 ...
##  $ age_Q           : Factor w/ 5 levels "15-24","25-34",..: 3 4 4 1 4 1 3 3 3 3 ...
##  $ frequency       : Factor w/ 4 levels "1/day","1 to 2/week",..: 1 1 3 1 3 1 4 2 3 3 ...
##  $ escape.exoticism: Factor w/ 2 levels "escape-exoticism",..: 2 1 2 1 1 2 2 2 2 2 ...
##  $ spirituality    : Factor w/ 2 levels "Not.spirituality",..: 1 1 1 2 2 1 1 1 1 1 ...
##  $ healthy         : Factor w/ 2 levels "healthy","Not.healthy": 1 1 1 1 2 1 1 1 2 1 ...
##  $ diuretic        : Factor w/ 2 levels "diuretic","Not.diuretic": 2 1 1 2 1 2 2 2 2 1 ...
##  $ friendliness    : Factor w/ 2 levels "friendliness",..: 2 2 1 2 1 2 2 1 2 1 ...
##  $ iron.absorption : Factor w/ 2 levels "iron absorption",..: 2 2 2 2 2 2 2 2 2 2 ...
##  $ feminine        : Factor w/ 2 levels "feminine","Not.feminine": 2 2 2 2 2 2 2 1 2 2 ...
##  $ sophisticated   : Factor w/ 2 levels "Not.sophisticated",..: 1 1 1 2 1 1 1 2 2 1 ...
##  $ slimming        : Factor w/ 2 levels "No.slimming",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ exciting        : Factor w/ 2 levels "exciting","No.exciting": 2 1 2 2 2 2 2 2 2 2 ...
##  $ relaxing        : Factor w/ 2 levels "No.relaxing",..: 1 1 2 2 2 2 2 2 2 2 ...
##  $ effect.on.health: Factor w/ 2 levels "effect on health",..: 2 2 2 2 2 2 2 2 2 2 ...
# select the 'keep_columns' to create a new dataset to visualize 

# column names to keep in the dataset
keep_columns <- c("Tea", "How", "how", "sugar", "where", "lunch")

tea_time <- select(tea, one_of(keep_columns))


# visualize the dataset
gather(tea_time) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar() + theme(axis.text.x = element_text(angle = 45, hjust = 1, size = 8))
## Warning: attributes are not identical across measure variables;
## they will be dropped

# Multiple Correspondence Analysis to a certain columns of the data
mca <- MCA(tea_time, graph = FALSE)

# summary of the model
summary(mca)
## 
## Call:
## MCA(X = tea_time, graph = FALSE) 
## 
## 
## Eigenvalues
##                        Dim.1   Dim.2   Dim.3   Dim.4   Dim.5   Dim.6   Dim.7
## Variance               0.279   0.261   0.219   0.189   0.177   0.156   0.144
## % of var.             15.238  14.232  11.964  10.333   9.667   8.519   7.841
## Cumulative % of var.  15.238  29.471  41.435  51.768  61.434  69.953  77.794
##                        Dim.8   Dim.9  Dim.10  Dim.11
## Variance               0.141   0.117   0.087   0.062
## % of var.              7.705   6.392   4.724   3.385
## Cumulative % of var.  85.500  91.891  96.615 100.000
## 
## Individuals (the 10 first)
##                       Dim.1    ctr   cos2    Dim.2    ctr   cos2    Dim.3
## 1                  | -0.298  0.106  0.086 | -0.328  0.137  0.105 | -0.327
## 2                  | -0.237  0.067  0.036 | -0.136  0.024  0.012 | -0.695
## 3                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 4                  | -0.530  0.335  0.460 | -0.318  0.129  0.166 |  0.211
## 5                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 6                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 7                  | -0.369  0.162  0.231 | -0.300  0.115  0.153 | -0.202
## 8                  | -0.237  0.067  0.036 | -0.136  0.024  0.012 | -0.695
## 9                  |  0.143  0.024  0.012 |  0.871  0.969  0.435 | -0.067
## 10                 |  0.476  0.271  0.140 |  0.687  0.604  0.291 | -0.650
##                       ctr   cos2  
## 1                   0.163  0.104 |
## 2                   0.735  0.314 |
## 3                   0.062  0.069 |
## 4                   0.068  0.073 |
## 5                   0.062  0.069 |
## 6                   0.062  0.069 |
## 7                   0.062  0.069 |
## 8                   0.735  0.314 |
## 9                   0.007  0.003 |
## 10                  0.643  0.261 |
## 
## Categories (the 10 first)
##                        Dim.1     ctr    cos2  v.test     Dim.2     ctr    cos2
## black              |   0.473   3.288   0.073   4.677 |   0.094   0.139   0.003
## Earl Grey          |  -0.264   2.680   0.126  -6.137 |   0.123   0.626   0.027
## green              |   0.486   1.547   0.029   2.952 |  -0.933   6.111   0.107
## alone              |  -0.018   0.012   0.001  -0.418 |  -0.262   2.841   0.127
## lemon              |   0.669   2.938   0.055   4.068 |   0.531   1.979   0.035
## milk               |  -0.337   1.420   0.030  -3.002 |   0.272   0.990   0.020
## other              |   0.288   0.148   0.003   0.876 |   1.820   6.347   0.102
## tea bag            |  -0.608  12.499   0.483 -12.023 |  -0.351   4.459   0.161
## tea bag+unpackaged |   0.350   2.289   0.056   4.088 |   1.024  20.968   0.478
## unpackaged         |   1.958  27.432   0.523  12.499 |  -1.015   7.898   0.141
##                     v.test     Dim.3     ctr    cos2  v.test  
## black                0.929 |  -1.081  21.888   0.382 -10.692 |
## Earl Grey            2.867 |   0.433   9.160   0.338  10.053 |
## green               -5.669 |  -0.108   0.098   0.001  -0.659 |
## alone               -6.164 |  -0.113   0.627   0.024  -2.655 |
## lemon                3.226 |   1.329  14.771   0.218   8.081 |
## milk                 2.422 |   0.013   0.003   0.000   0.116 |
## other                5.534 |  -2.524  14.526   0.197  -7.676 |
## tea bag             -6.941 |  -0.065   0.183   0.006  -1.287 |
## tea bag+unpackaged  11.956 |   0.019   0.009   0.000   0.226 |
## unpackaged          -6.482 |   0.257   0.602   0.009   1.640 |
## 
## Categorical variables (eta2)
##                      Dim.1 Dim.2 Dim.3  
## Tea                | 0.126 0.108 0.410 |
## How                | 0.076 0.190 0.394 |
## how                | 0.708 0.522 0.010 |
## sugar              | 0.065 0.001 0.336 |
## where              | 0.702 0.681 0.055 |
## lunch              | 0.000 0.064 0.111 |

Visualization and interpretation of MCA

We will use the factoextra R package to help in the interpretation and the visualization of the multiple correspondence analysis.

  • Eigenvalues / Variances

These are the variances and the percentages of variance retained by each dimension. The proportion of variance retained by each dimension (axis) can be extracted using the function get_eigenvalue() as follows:

library(factoextra) # Access the factoextra library

eig_val <- get_eigenvalue(mca)
head(eig_val)
##       eigenvalue variance.percent cumulative.variance.percent
## Dim.1  0.2793712        15.238428                    15.23843
## Dim.2  0.2609265        14.232352                    29.47078
## Dim.3  0.2193358        11.963768                    41.43455
## Dim.4  0.1894379        10.332978                    51.76753
## Dim.5  0.1772231         9.666715                    61.43424
## Dim.6  0.1561774         8.518770                    69.95301

To visualize the percentages of inertia explained by each MCA dimension, use the function fviz_eig() or fviz_screeplot().

fviz_screeplot(mca, addlabels = TRUE, ylim = c(0, 45))

  • Biplot

The plot below shows a global pattern within the data. The function fviz_mca_biplot() could also be used to draw a biplot of individuals and variable categories; here, we use the standard plot() function.

# visualize MCA
plot(mca, invisible=c("ind"), habillage = "quali")

Comments: The distance between variable categories gives a measure of their similarity. For example, tea bag and chain store are more similar than black and lemon, and green is different from all the other categories.

  • Correlation between variables and principal dimensions

To visualize the correlation between variables and MCA principal dimensions, we use:

fviz_mca_var(mca, choice = "mca.cor", 
            repel = TRUE, # Avoid text overlapping (slow)
            ggtheme = theme_minimal())

Comments:

  • The plot above helps to identify the variables that are the most correlated with each dimension. The squared correlations between the variables and the dimensions are used as coordinates.
  • It can be seen that the variables how and where are by far the most correlated with dimension 1, and they also dominate dimension 2, while sugar and lunch are only weakly related to the first two dimensions (see the sketch below).
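
The squared correlations plotted above can also be read directly from the MCA object. A minimal sketch (they match the eta2 table in the summary output):

# Squared correlations (eta2) between the variables and the first two dimensions
round(mca$var$eta2[, 1:2], 3)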

Analysis of longitudinal data

This chapter focuses on analysis of longitudinal data. These data refer to repeated measures. For example, the response variable may be measured under a number of different experimental conditions or on a number of different occasions over time.

Read the BPRS and RATS data sets

library(dplyr)
BPRSL <- read.csv("BPRSL") # Read the data from my local file
glimpse(BPRSL) 
## Rows: 360
## Columns: 6
## $ X         <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17...
## $ treatment <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
## $ subject   <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17...
## $ weeks     <chr> "week0", "week0", "week0", "week0", "week0", "week0", "we...
## $ bprs      <int> 42, 58, 54, 55, 72, 48, 71, 30, 41, 57, 30, 55, 36, 38, 6...
## $ week      <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...

Comment: Variables “treatment” and “subject” must be changed to factors.

RATSL <- read.csv("RATSL") # Read the data from my local file
glimpse(RATSL) 
## Rows: 176
## Columns: 6
## $ X      <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 1...
## $ ID     <int> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 1, 2,...
## $ Group  <int> 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 1, 1, 1, 1, ...
## $ WD     <chr> "WD1", "WD1", "WD1", "WD1", "WD1", "WD1", "WD1", "WD1", "WD1...
## $ Weight <int> 240, 225, 245, 260, 255, 260, 275, 245, 410, 405, 445, 555, ...
## $ Time   <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 8, 8, 8, 8, ...

Comment: Variables “ID” and “Group” must be changed to factors.

# Changes for BPRSL data set
BPRSL$treatment <- factor(BPRSL$treatment)
BPRSL$subject <- factor(BPRSL$subject)
BPRSL <- select(BPRSL, -X) # Drop the row-index column X
glimpse(BPRSL)
## Rows: 360
## Columns: 5
## $ treatment <fct> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
## $ subject   <fct> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17...
## $ weeks     <chr> "week0", "week0", "week0", "week0", "week0", "week0", "we...
## $ bprs      <int> 42, 58, 54, 55, 72, 48, 71, 30, 41, 57, 30, 55, 36, 38, 6...
## $ week      <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
str(BPRSL)
## 'data.frame':    360 obs. of  5 variables:
##  $ treatment: Factor w/ 2 levels "1","2": 1 1 1 1 1 1 1 1 1 1 ...
##  $ subject  : Factor w/ 20 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
##  $ weeks    : chr  "week0" "week0" "week0" "week0" ...
##  $ bprs     : int  42 58 54 55 72 48 71 30 41 57 ...
##  $ week     : int  0 0 0 0 0 0 0 0 0 0 ...
# Changes for RATSL data set
RATSL$ID <- factor(RATSL$ID)
RATSL$Group <- factor(RATSL$Group)
RATSL <- select(RATSL, -X) # Drop the row-index column X
glimpse(RATSL)
## Rows: 176
## Columns: 5
## $ ID     <fct> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 1, 2,...
## $ Group  <fct> 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 1, 1, 1, 1, ...
## $ WD     <chr> "WD1", "WD1", "WD1", "WD1", "WD1", "WD1", "WD1", "WD1", "WD1...
## $ Weight <int> 240, 225, 245, 260, 255, 260, 275, 245, 410, 405, 445, 555, ...
## $ Time   <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 8, 8, 8, 8, ...
str(RATSL)
## 'data.frame':    176 obs. of  5 variables:
##  $ ID    : Factor w/ 16 levels "1","2","3","4",..: 1 2 3 4 5 6 7 8 9 10 ...
##  $ Group : Factor w/ 3 levels "1","2","3": 1 1 1 1 1 1 1 1 2 2 ...
##  $ WD    : chr  "WD1" "WD1" "WD1" "WD1" ...
##  $ Weight: int  240 225 245 260 255 260 275 245 410 405 ...
##  $ Time  : int  1 1 1 1 1 1 1 1 1 1 ...

Comment: Both data sets are now ready for the analysis.

Analysis of Chapter 8 of MABS using the RATS data set

Plot of the rats’ weights, differentiating between groups.

library(ggplot2)
ggplot(RATSL, aes(x = Time, y = Weight, linetype = ID)) +
  geom_line() + scale_linetype_manual(values = rep(1:4, times=4)) +
  facet_grid(. ~ Group, labeller = label_both) +
  theme_bw() + theme(legend.position = "none") +
  scale_y_continuous(limits = c(min(RATSL$Weight), max(RATSL$Weight)),
                     name = "Weight (grams)") +
  scale_x_continuous(name = "Time (days)")

Interpretations: There is a clear difference between the weights of the group 1 rats and those in the other two groups. Group 1 rats weigh less, while groups 2 and 3 weigh more, with one rat in group 2 reaching about 600 grams. Of the 16 rats in total, group 1 contains half, while groups 2 and 3 split the other half.
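
As a quick sanity check on the group sizes, the rats in each group can be counted directly; a small sketch using the dplyr verbs already loaded:

# Count the number of distinct rats per group
RATSL %>% distinct(ID, Group) %>% count(Group)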

Standardize the rats’ weights

# Standardize the variable Weights
RATSL <- RATSL %>%
  group_by(Time) %>%
  mutate(stdWeight = (Weight - mean(Weight)) / sd(Weight)) %>%
  ungroup()

# Glimpse the data
glimpse(RATSL)
## Rows: 176
## Columns: 6
## $ ID        <fct> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 1,...
## $ Group     <fct> 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 1, 1, 1, ...
## $ WD        <chr> "WD1", "WD1", "WD1", "WD1", "WD1", "WD1", "WD1", "WD1", "...
## $ Weight    <int> 240, 225, 245, 260, 255, 260, 275, 245, 410, 405, 445, 55...
## $ Time      <int> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 8, 8, 8, ...
## $ stdWeight <dbl> -1.0011429, -1.1203857, -0.9613953, -0.8421525, -0.881900...
# Plot again with the standardized Weight
ggplot(RATSL, aes(x = Time, y = stdWeight, linetype = ID)) +
  geom_line() +
  scale_linetype_manual(values = rep(1:4, times=4)) +
  facet_grid(. ~ Group, labeller = label_both) +
  theme_bw() + theme(legend.position = "none") +
  theme(panel.grid.minor.y = element_blank()) + 
  scale_y_continuous(name = "standardized Weight")

Interpretations: The rats’ weights have been standardized within each time point to mean 0 and standard deviation 1. The plot shows the tracking phenomenon: rats with higher weights at the beginning tend to keep higher values throughout the study.
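
A small sketch to verify the standardization: within each time point, stdWeight should now have mean approximately 0 and standard deviation 1.

# Check the within-day mean and standard deviation of stdWeight
RATSL %>%
  group_by(Time) %>%
  summarise(mean_std = mean(stdWeight), sd_std = sd(stdWeight)) %>%
  head(3)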

The average (mean) profiles for each rat group

# Number of measurement occasions (days); note the standard error below divides
# by this, although the number of rats per group would arguably be a more
# natural divisor
n <- RATSL$Time %>% unique() %>% length()

# Summary data with mean and standard error of Weight by Group and day 
RATSS <- RATSL %>%
  group_by(Group, Time) %>%
  summarise( mean = mean(Weight), se = sd(Weight) / sqrt(n)) %>%
  ungroup()
## `summarise()` regrouping output by 'Group' (override with `.groups` argument)
# Glimpse the data
glimpse(RATSS)
## Rows: 33
## Columns: 4
## $ Group <fct> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2...
## $ Time  <int> 1, 8, 15, 22, 29, 36, 43, 44, 50, 57, 64, 1, 8, 15, 22, 29, 3...
## $ mean  <dbl> 250.625, 255.000, 254.375, 261.875, 264.625, 265.000, 267.375...
## $ se    <dbl> 4.589478, 3.947710, 3.460116, 4.100800, 3.333956, 3.552939, 3...
# Plot the mean profiles
ggplot(RATSS, aes(x = Time, y = mean, linetype = Group, shape = Group)) +
  geom_line() +
  scale_linetype_manual(values = c(1, 2, 3)) +
  geom_point(size = 3) +
  scale_shape_manual(values = c(1, 2, 3)) +
  geom_errorbar(aes(ymin = mean - se, ymax = mean + se, linetype = "1"), width = 0.3) +
  theme_bw() + theme(legend.position = c(0.8, 0.8), # set after theme_bw() so it is not overridden
                     panel.grid.major = element_blank(),
                     panel.grid.minor = element_blank()) +
  scale_y_continuous(name = "mean(Weight) +/- se(Weight)")

Interpretations: There is no overlap in the mean profiles of the three rat groups, suggesting that there is likely a difference between the three groups with respect to the mean weight values. Note also the exception in week 7 (days 43 and 44), when body weight was recorded twice.

Daily rat weights per group – use of boxplots

ggplot(RATSL, aes(x = factor(Time), y = Weight, fill = Group)) +
  geom_boxplot(position = position_dodge(width = 0.9)) +
  theme_bw() + theme(panel.grid.major = element_blank(),
                     panel.grid.minor = element_blank()) +
  scale_x_discrete(name = "days")

Interpretations: The daily weight distributions are fairly consistent within each group, with a slight increase in group 2 during the last days of the experiment.

Mean rat weights per group

# Create a summary data by Group and ID with mean as the summary variable.
RATSL8S <- RATSL %>%
  group_by(Group, ID) %>%
  summarise( mean = mean(Weight)) %>%
  ungroup()

# Glimpse the data
glimpse(RATSL8S)
## Rows: 16
## Columns: 3
## $ Group <fct> 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3
## $ ID    <fct> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
## $ mean  <dbl> 261.0909, 237.6364, 260.1818, 266.5455, 269.4545, 274.7273, 2...
# Draw a boxplot of the mean versus Group
ggplot(RATSL8S, aes(x = Group, y = mean)) +
  geom_boxplot() +
  stat_summary(fun = "mean", geom = "point", shape = 23, size = 2, fill = "white") +
  theme_bw() + theme(panel.grid.major = element_blank(),
                     panel.grid.minor = element_blank()) +
  scale_y_continuous(name = "mean(Weight), days 1-64")

Interpretations: Rats in group 1 have the lowest mean weights, followed by those in group 2; group 3 seems to have the rats with the highest mean weights. Each group also contains one outlier (identified programmatically in the sketch below).
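
Instead of reading the outliers off the boxplots, they can be located programmatically. A sketch using the 1.5 × IQR whisker rule that geom_boxplot() itself applies:

# Flag, per group, the rats whose mean weight falls outside the whiskers
RATSL8S %>%
  group_by(Group) %>%
  filter(mean < quantile(mean, 0.25) - 1.5 * IQR(mean) |
         mean > quantile(mean, 0.75) + 1.5 * IQR(mean))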

# Remove the outliers per group

RATSL8S1 <- RATSL8S  %>%
  filter(ID != 2 & ID != 12 & ID != 13)

# Draw a boxplot of the mean versus Group
ggplot(RATSL8S1, aes(x = Group, y = mean)) +
  geom_boxplot() +
  stat_summary(fun = "mean", geom = "point", shape = 23, size = 2, fill = "white") +
  theme_bw() + theme(panel.grid.major = element_blank(),
                     panel.grid.minor = element_blank()) +
  scale_y_continuous(name = "mean(Weight), days 1-64")

Interpretations: All outliers have been removed.

Apply Student t-test or ANOVA?

A t-test compares only two groups; to compare three groups or more, an ANOVA should be performed.

# Perform ANOVA
oneway.test(mean ~ Group, data = RATSL8S1, var.equal = TRUE)
## 
##  One-way analysis of means
## 
## data:  mean and Group
## F = 2577.4, num df = 2, denom df = 10, p-value = 2.721e-14
# Same as fit the linear model with the mean as the response 
fit <- lm(mean ~ Group, data = RATSL8S1)

# Compute the analysis of variance table for the fitted model with anova()
anova(fit)
## Analysis of Variance Table
## 
## Response: mean
##           Df Sum Sq Mean Sq F value    Pr(>F)    
## Group      2 175958   87979  2577.4 2.721e-14 ***
## Residuals 10    341      34                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Interpretations: The ANOVA assesses whether there is any difference between the groups’ mean weights. With a very small p-value (2.721e-14), the test provides strong evidence of a group weight difference. In other words, the mean weights differ significantly between the groups.
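
The ANOVA only indicates that the group means differ somewhere. As an optional follow-up, not part of the original analysis, base R’s pairwise.t.test() shows which pairs of groups differ, with a correction for multiple testing:

# Pairwise comparisons of the group means with Bonferroni-adjusted p-values
pairwise.t.test(RATSL8S1$mean, RATSL8S1$Group, p.adjust.method = "bonferroni")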

Analysis of Chapter 9 of MABS using the BPRS data set

BPRS values for all 40 men, differentiating between the treatment groups

ggplot(BPRSL, aes(x = week, y = bprs, group = subject)) +
  geom_text(aes(label = treatment)) +
  scale_x_continuous(name = "week", breaks = seq(0, 8, 2)) +
  scale_y_continuous(name = "bprs") +
  theme_bw() + theme(panel.grid.major = element_blank(),
                     panel.grid.minor = element_blank())

Interpretations: The plot shows the bprs values of all 40 men against time, ignoring the repeated-measures structure of the data but labelling each observation with its treatment group. Throughout the 8 weeks the observations of the two treatment groups remain thoroughly intermixed, with no obvious separation.

Fit a multiple linear regression model

bprs is the response; week and treatment are the explanatory variables.

# create a regression model BPRSL_reg
BPRSL_reg <- lm(bprs ~ week + treatment, data = BPRSL)

# print out a summary of the model
summary(BPRSL_reg)
## 
## Call:
## lm(formula = bprs ~ week + treatment, data = BPRSL)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -22.454  -8.965  -3.196   7.002  50.244 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  46.4539     1.3670  33.982   <2e-16 ***
## week         -2.2704     0.2524  -8.995   <2e-16 ***
## treatment2    0.5722     1.3034   0.439    0.661    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 12.37 on 357 degrees of freedom
## Multiple R-squared:  0.1851, Adjusted R-squared:  0.1806 
## F-statistic: 40.55 on 2 and 357 DF,  p-value: < 2.2e-16

Interpretations: These are the results of fitting a linear regression model to the BPRS data with bprs as the response, treatment and week as explanatory variables, and ignoring the repeated-measures structure of the data. The baseline is treatment group 1; conditional on week, the estimate for treatment group 2 would be 46.4539 + 0.5722 = 47.0261, a slight difference between the treatment groups. However, the treatment 2 coefficient is not statistically significant (p = 0.661), whereas the regression on week clearly is.

Plot of the individual men’s bprs profiles.

ggplot(BPRSL, aes(x = week, y = bprs, color = interaction(subject, treatment))) +
  geom_line() + geom_point() +
  scale_x_continuous(name = "week") +
  scale_y_continuous(name = "bprs") +
  theme_bw() + theme(panel.grid.major = element_blank(),
                     panel.grid.minor = element_blank())

Interpretations: The plot takes the longitudinal structure of the data into account by joining together the points belonging to each man, showing the bprs profiles of the individual men in each treatment group. There are 20 men in treatment group 1 and 20 in treatment group 2.

Scatterplot matrix of the repeated measures in the BPRS data.

# Use of the BPRS data set before the transformation

BPRS <- read.table(file = 
"https://raw.githubusercontent.com/KimmoVehkalahti/MABS/master/Examples/data/BPRS.txt", sep = " ", header = TRUE)
str(BPRS)
## 'data.frame':    40 obs. of  11 variables:
##  $ treatment: int  1 1 1 1 1 1 1 1 1 1 ...
##  $ subject  : int  1 2 3 4 5 6 7 8 9 10 ...
##  $ week0    : int  42 58 54 55 72 48 71 30 41 57 ...
##  $ week1    : int  36 68 55 77 75 43 61 36 43 51 ...
##  $ week2    : int  36 61 41 49 72 41 47 38 39 51 ...
##  $ week3    : int  43 55 38 54 65 38 30 38 35 55 ...
##  $ week4    : int  41 43 43 56 50 36 27 31 28 53 ...
##  $ week5    : int  40 34 28 50 39 29 40 26 22 43 ...
##  $ week6    : int  38 28 29 47 32 33 30 26 20 43 ...
##  $ week7    : int  47 28 25 42 38 27 31 25 23 39 ...
##  $ week8    : int  51 28 24 46 32 25 31 24 21 32 ...
pairs(BPRS[, 3:11], cex = 0.7)

Interpretations: The scatterplot matrix of the repeated measures of bprs demonstrates that the repeated measurements are certainly not independent of one another (a numeric check follows below).
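
The same message can be read off numerically. A sketch computing the correlations between the weekly measures (columns 3 to 11 of BPRS):

# Correlations between the repeated measures; values well above 0 confirm dependence
round(cor(BPRS[, 3:11]), 2)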

Fit the random intercept model

Fitting a random intercept model allows the linear regression fit for each man to differ in intercept from other men. bprs is the response, and explanatory variables are week and treatment.

library(lme4)

# Create a random intercept model
BPRS_ref <- lmer(bprs ~ week + treatment + (1 | subject), data = BPRSL, REML = FALSE)

# Print the summary of the model
summary(BPRS_ref)
## Linear mixed model fit by maximum likelihood  ['lmerMod']
## Formula: bprs ~ week + treatment + (1 | subject)
##    Data: BPRSL
## 
##      AIC      BIC   logLik deviance df.resid 
##   2748.7   2768.1  -1369.4   2738.7      355 
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -3.0481 -0.6749 -0.1361  0.4813  3.4855 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  subject  (Intercept)  47.41    6.885  
##  Residual             104.21   10.208  
## Number of obs: 360, groups:  subject, 20
## 
## Fixed effects:
##             Estimate Std. Error t value
## (Intercept)  46.4539     1.9090  24.334
## week         -2.2704     0.2084 -10.896
## treatment2    0.5722     1.0761   0.532
## 
## Correlation of Fixed Effects:
##            (Intr) week  
## week       -0.437       
## treatment2 -0.282  0.000

Interpretations:

  • The estimated variance of the men’s random effects (47.41) is not especially large relative to the residual variance, indicating that the variation in the intercepts of the regression fits of the individual men’s bprs profiles may be modest.

  • The estimated regression parameters for the week and the dummy variable are very similar to those from fitting the independence model (multiple linear regression).

  • However, the estimated standard error of week is much smaller in the random intercept model than in the linear model. Assuming independence inflates the standard error of a within-subject covariate such as week, because it ignores the within-subject dependencies that would otherwise reduce the error variance in the model.

  • For a between-subject effect such as the treatment dummy, the textbook expectation is the opposite: ignoring the correlated nature of the data makes the effective sample size smaller than the actual sample size, so the independence model’s estimates are unrealistically precise. In these data the contrast is mild, however: the standard error of treatment2 is 1.0761 in the random intercept model against 1.3034 in the linear model (see the side-by-side sketch below).
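
A minimal sketch placing the two sets of fixed-effect standard errors side by side, assuming BPRSL_reg (the lm fit) and BPRS_ref (the lmer fit) from the code above are still available:

# Standard errors from the independence model and the random intercept model
se_lm   <- summary(BPRSL_reg)$coefficients[, "Std. Error"]
se_lmer <- coef(summary(BPRS_ref))[, "Std. Error"]
round(cbind(lm = se_lm, lmer = se_lmer), 4)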

Fit the random intercept and random slope model

Fitting a random intercept and random slope model allows the linear regression fits for each individual to differ not only in intercept but also in slope. This makes it possible to account for the individual differences in the men’s bprs profiles as well as the effect of time.

# create a random intercept and random slope model
BPRS_ref1 <- lmer(bprs ~ week + treatment + (week | subject), data = BPRSL, REML = FALSE)

# print a summary of the model
summary(BPRS_ref1)
## Linear mixed model fit by maximum likelihood  ['lmerMod']
## Formula: bprs ~ week + treatment + (week | subject)
##    Data: BPRSL
## 
##      AIC      BIC   logLik deviance df.resid 
##   2745.4   2772.6  -1365.7   2731.4      353 
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -2.8919 -0.6194 -0.0691  0.5531  3.7976 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev. Corr 
##  subject  (Intercept) 64.8222  8.0512        
##           week         0.9609  0.9802   -0.51
##  Residual             97.4305  9.8707        
## Number of obs: 360, groups:  subject, 20
## 
## Fixed effects:
##             Estimate Std. Error t value
## (Intercept)  46.4539     2.1052  22.066
## week         -2.2704     0.2977  -7.626
## treatment2    0.5722     1.0405   0.550
## 
## Correlation of Fixed Effects:
##            (Intr) week  
## week       -0.582       
## treatment2 -0.247  0.000
# perform an ANOVA test on the two models
anova(BPRS_ref1, BPRS_ref)
## Data: BPRSL
## Models:
## BPRS_ref: bprs ~ week + treatment + (1 | subject)
## BPRS_ref1: bprs ~ week + treatment + (week | subject)
##           npar    AIC    BIC  logLik deviance  Chisq Df Pr(>Chisq)  
## BPRS_ref     5 2748.7 2768.1 -1369.4   2738.7                       
## BPRS_ref1    7 2745.4 2772.6 -1365.7   2731.4 7.2721  2    0.02636 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Interpretations:

  • The results for the fixed effects are very similar to those of the random intercept model.

  • The likelihood ratio test for the random intercept model versus the random intercept and slope model gives a chi-squared statistic of 7.2721 with 2 degrees of freedom, and the associated p-value (0.026) is small.

  • The random intercept and slope model therefore provides a better fit for these data; it also has the smaller AIC (2745.4 vs 2748.7). The test is reproduced by hand in the sketch below.
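
The likelihood ratio test can be reproduced by hand from the two deviances printed above (2738.7 − 2731.4 ≈ 7.27, with 7 − 5 = 2 extra parameters). A small sketch:

# Chi-squared statistic and p-value of the likelihood ratio test, computed by hand
chi_sq <- deviance(BPRS_ref) - deviance(BPRS_ref1)
pchisq(chi_sq, df = 2, lower.tail = FALSE)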

Fit the random intercept and random slope model with interaction

Fit a random intercept and slope model that allows for a treatment × week interaction.

# create a random intercept and random slope model with the interaction
BPRS_ref2 <- lmer(bprs ~ week * treatment + (week | subject), data = BPRSL, REML = FALSE)

# print a summary of the model
summary(BPRS_ref2)
## Linear mixed model fit by maximum likelihood  ['lmerMod']
## Formula: bprs ~ week * treatment + (week | subject)
##    Data: BPRSL
## 
##      AIC      BIC   logLik deviance df.resid 
##   2744.3   2775.4  -1364.1   2728.3      352 
## 
## Scaled residuals: 
##     Min      1Q  Median      3Q     Max 
## -3.0512 -0.6271 -0.0768  0.5288  3.9260 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev. Corr 
##  subject  (Intercept) 64.9964  8.0620        
##           week         0.9687  0.9842   -0.51
##  Residual             96.4707  9.8220        
## Number of obs: 360, groups:  subject, 20
## 
## Fixed effects:
##                 Estimate Std. Error t value
## (Intercept)      47.8856     2.2521  21.262
## week             -2.6283     0.3589  -7.323
## treatment2       -2.2911     1.9090  -1.200
## week:treatment2   0.7158     0.4010   1.785
## 
## Correlation of Fixed Effects:
##             (Intr) week   trtmn2
## week        -0.650              
## treatment2  -0.424  0.469       
## wek:trtmnt2  0.356 -0.559 -0.840
# perform an ANOVA test on the two models
anova(BPRS_ref2, BPRS_ref1)
## Data: BPRSL
## Models:
## BPRS_ref1: bprs ~ week + treatment + (week | subject)
## BPRS_ref2: bprs ~ week * treatment + (week | subject)
##           npar    AIC    BIC  logLik deviance  Chisq Df Pr(>Chisq)  
## BPRS_ref1    7 2745.4 2772.6 -1365.7   2731.4                       
## BPRS_ref2    8 2744.3 2775.4 -1364.1   2728.3 3.1712  1    0.07495 .
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Interpretations:

  • The likelihood ratio test of the interaction random intercept and slope model against the corresponding model without the interaction gives a chi-squared statistic of 3.1712 with 1 degree of freedom; the associated p-value (0.075) falls just short of significance at the 5% level.

  • The interaction model does have the smallest AIC (2744.3), so by that criterion it fits the men’s bprs data slightly better, although the improvement over the model without the interaction is modest.

  • The estimated interaction parameter indicates that the bprs slopes are, on average, 0.72 higher (i.e., declining less steeply) for men in treatment group 2 than for men in group 1; the derived group-specific slopes are computed in the sketch below.
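
The group-specific average slopes implied by the fixed effects can be computed directly; a minimal sketch using lme4’s fixef():

# Derived average bprs slopes per treatment group under the interaction model
fe <- fixef(BPRS_ref2)
c(treatment1 = unname(fe["week"]),
  treatment2 = unname(fe["week"] + fe["week:treatment2"]))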

We can find the fitted values from the interaction model and plot the fitted bprs values for each man; these are shown in the figures below alongside the observed values.

# Create a vector of the fitted values
Fitted <- fitted(BPRS_ref2)

# Add the fitted values as a new column of BPRSL
BPRSL <- BPRSL %>%
mutate(Fitted)

# draw the plot of BPRSL with the observed bprs values
graph1 <- ggplot(BPRSL, aes(x = week, y = bprs, color = interaction(subject, treatment))) +
  geom_line() + geom_point() +
  scale_x_continuous(name = "week") +
  scale_y_continuous(name = "bprs") +
  theme_bw() + theme(legend.position = "none",
                     panel.grid.major = element_blank(),
                     panel.grid.minor = element_blank()) +
  ggtitle("Observed")

# draw the plot of BPRSL with the fitted bprs values
graph2 <- ggplot(BPRSL, aes(x = week, y = Fitted, color = interaction(subject, treatment))) +
  geom_line() + geom_point() +
  scale_x_continuous(name = "week") +
  scale_y_continuous(name = "bprs") +
  theme_bw() + theme(legend.position = "none",
                     panel.grid.major = element_blank(),
                     panel.grid.minor = element_blank()) +
  ggtitle("Fitted")

graph1; graph2

Interpretations: The two figures show the observed bprs profiles and the fitted profiles from the interaction model. Comparing them underlines how well the interaction model captures the observed data; a rough numeric companion follows below.
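
As a rough numeric companion to the figures, the observed and fitted values can be compared directly; a sketch, using the Fitted column added to BPRSL above:

# Correlation and root-mean-square error between observed and fitted bprs values
cor(BPRSL$bprs, BPRSL$Fitted)
sqrt(mean((BPRSL$bprs - BPRSL$Fitted)^2))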